    Factors affecting the perception of transparent motion

    It is possible to create a perception of transparency by combining patterns having different motions. Two particular combination rules have specific interpretations in terms of physical phenomena: additive (specular reflection) and multiplicative (shadow illumination). Arbitrary combination rules applied to random patterns generate percepts in which the motions of the two patterns are visible, but have superimposed noise. It is also possible to combine the patterns (using an exclusive-OR rule) so that only noise is visible. Within a one-dimensional family of combination rules that includes addition and multiplication, there is a range where smooth motions are seen with no superimposed noise; this range is centered about the additive combination. This result suggests that the motion system deals with a linear representation of luminance, and is consistent with the analysis of motion by linear sensors. This research gives tentative validation to the use of beam splitters (which combine images additively) in the construction of heads-up aviation displays. Further work is needed to determine whether the superiority of additive combination generalizes to the case of full-color imagery (there are results in the literature suggesting that subtractive color mixture yields the best legibility of overlapping alphanumerics).
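
    A minimal sketch of the combination rules described above, assuming numpy and binary (0/1) random patterns; the array size, seed, and the combine() helper are illustrative, not from the original study:

        import numpy as np

        rng = np.random.default_rng(0)
        a = rng.integers(0, 2, (256, 256)).astype(float)  # random pattern 1 (values 0/1)
        b = rng.integers(0, 2, (256, 256)).astype(float)  # random pattern 2 (values 0/1)

        additive = (a + b) / 2                    # specular-reflection interpretation
        multiplicative = a * b                    # shadow-illumination interpretation
        xor = np.logical_xor(a, b).astype(float)  # when animated, only noise is visible

        # One-dimensional family spanning addition (w = 0) to multiplication (w = 1);
        # smooth, noise-free motion is reported for a range centered on w = 0.
        def combine(a, b, w):
            return (1 - w) * (a + b) / 2 + w * (a * b)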

    Vision Science and Technology at NASA: Results of a Workshop

    A broad review is given of vision science and technology within NASA. The subject is defined and its applications, both in NASA and in the nation at large, are noted. A survey of current NASA efforts is given, noting strengths and weaknesses of the NASA program.

    Efficient use of bit planes in the generation of motion stimuli

    The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and, by extension, sine-wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame, the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the weights applied to the various components can be set independently in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
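
    A sketch of the lookup-table step under stated assumptions (two bit planes per grating, halftoned components treated as +/-1, function and parameter names invented here); the identity sin(x)cos(theta) + cos(x)sin(theta) = sin(x + theta) is what turns the two static components into a single drifting grating:

        import numpy as np

        def lut_for_frame(frame, temporal_freq, contrast, frame_rate=60.0,
                          n_bits=8, mean_level=0.5):
            """Lookup table displaying a weighted sum of the halftoned sine
            (bit 0) and cosine (bit 1) phase components of one grating."""
            theta = 2 * np.pi * temporal_freq * frame / frame_rate
            lut = np.empty(2 ** n_bits)
            for index in range(2 ** n_bits):
                s = 1.0 if index & 1 else -1.0   # halftoned sine component
                c = 1.0 if index & 2 else -1.0   # halftoned cosine component
                # counterphase modulation in temporal quadrature:
                # sin(x)cos(theta) + cos(x)sin(theta) = sin(x + theta)
                weight = (s * np.cos(theta) + c * np.sin(theta)) / 2
                lut[index] = mean_level * (1 + contrast * weight)
            return lut

        # Rewriting the hardware color lookup table with lut_for_frame(t, ...)
        # on each frame animates the grating with no image-memory transfers.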

    Factors Influencing Scanning for Alerts

    Aircraft pilots (like operators in many domains) are required to monitor locations for rare events while concurrently performing their everyday tasks. In many cases, the visual parameters of an alert are such that it is not visible unless directly fixated. For this reason, critical alerts should be designed to be visible in peripheral vision and/or augmented by an audio alarm. We use the term "conspicuity" to distinguish the attention-getting power of a visual stimulus from simple visibility in a single-task context. We have measured conspicuity in an experimental paradigm designed to test the N-SEEV model of attention and noticing (Steelman-Allen et al., HFES 2009). The subject performed a demanding central task while monitoring four peripheral locations for color-change events. Visibility of the alerting stimuli was measured separately in a control experiment in which the subject maintained steady fixation without the central task. Thresholds in the dual-task experiments were lower than would be expected from the results of the control experiment, because the subjects actively sampled the alert locations with fixations while performing the central task. Locations of high-frequency alerts are generally sampled more often than locations of low-frequency alerts, and alert-location sampling in general increases with practice, presumably because the demands of the central task are reduced.

    Discovery of Activities via Statistical Clustering of Fixation Patterns

    Human behavior often consists of a series of distinct activities, each characterized by a unique signature of visual behavior. This is true even in a restricted domain, such as piloting an aircraft, where patterns of visual signatures might represent activities like communicating, navigating, and monitoring. We propose a novel analysis method for gaze-tracking data, to perform blind discovery of these activities based on their behavioral signatures. The method is in some respects similar to recurrence analysis, but here we compare not individual fixations, but groups of fixations aggregated over a fixed time interval. The duration of this interval is a parameter that we will refer to as τ. We assume that the environment has been divided into a set of N different areas-of-interest (AOIs). For a given interval of time of duration τ, we compute the proportion of time spent fixating each AOI, resulting in an N-dimensional vector. These proportions can be converted to counts by multiplying by τ divided by the average fixation duration (another parameter, which we fix at 280 milliseconds). We compare different intervals by computing the chi-square statistic. The p-value associated with the statistic is the probability of observing the data under the hypothesis that the data in the two intervals were generated by a single process, with a single set of probabilities governing the fixation of each AOI.

    We have investigated the method using a set of 10 synthetic "activities" that sample 4 AOIs. Four of these activities visit 3 of the 4 AOIs with equal probability; as there are four different ways to leave one out, there are four such activities. Similarly, there are six different activities that leave two out. Sequences of simulated behavior were generated by running each activity for 40 seconds, in sequence, for a total of 6.7 minutes. The figure shows the matrix of chi-square statistics, using a value of 2.8 seconds for τ, corresponding to 10 fixations. Low values (dark) indicate poor evidence for activity differences, while high values (bright) indicate strong evidence. The dark squares along the main diagonal each correspond to a forty-second interval in which the activity was held constant; the 4x4 block at the lower left corresponds to the four leave-one-out activities, while the 6x6 block at the upper right corresponds to the leave-two-out activities. (The anti-diagonal pattern of white squares indicates those activity pairs that share no AOIs.)

    The chi-square values can be binarized by choosing a particular significance level; we are interested in grouping bins that represent the same activity, effectively accepting the null hypothesis. Therefore, we may adopt a relatively lax criterion; for example, choosing a p-value of 0.2 means that two behaviors that have only a 1-in-5 chance of having been produced by a single activity might nevertheless be clustered together. We have explored several methods for clustering the data and solving for the activity probabilities. Greedy methods begin by selecting the time bin that is similar to the most (or fewest) other bins, and then forming a cluster from it and all other non-discriminable bins. These methods show mediocre performance because they do not take into account temporal contiguity. Preliminary results indicate that methods that "grow" clusters in time from seed points perform better.
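
    A sketch of the interval-comparison step, assuming scipy; the τ / (mean fixation duration) conversion follows the description above, while the function names and the epsilon guard are assumptions made here for illustration:

        import numpy as np
        from scipy.stats import chi2_contingency

        def interval_counts(proportions, tau=2.8, mean_fix_dur=0.280):
            """Convert per-AOI fixation-time proportions (an N-vector) to
            approximate fixation counts for one interval of duration tau."""
            return np.asarray(proportions) * (tau / mean_fix_dur)

        def compare_intervals(counts_a, counts_b):
            """Chi-square test of the hypothesis that two intervals were
            generated by a single activity (one set of AOI probabilities)."""
            table = np.vstack([counts_a, counts_b]) + 1e-9  # guard empty cells
            chi2, p, dof, expected = chi2_contingency(table)
            return chi2, p

        # Intervals whose pairwise p-value exceeds a lax criterion (e.g. 0.2)
        # are candidates for merging into a single activity cluster.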

    Rapid Assessment of Contrast Sensitivity with Mobile Touch-screens

    The availability of low-cost, high-quality touch-screen displays in modern mobile devices has created opportunities for new approaches to routine visual measurements. Here we describe a novel method in which subjects use a finger swipe to indicate the transition from visible to invisible on a grating that is swept in both contrast and spatial frequency. Because a single image can be swiped in about a second, it is practical to use a series of images to zoom in on particular ranges of contrast or frequency, both to increase the accuracy of the measurements and to obtain an estimate of the reliability of the subject. Sensitivities to chromatic and spatio-temporal modulations are easily measured using the same method. We will demonstrate a prototype for Apple Computer's iPad-iPod-iPhone family of devices, implemented using an open-source scripting environment known as QuIP (QUick Image Processing).
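
    A sketch of such a swept test image (a Campbell-Robson-style chart) under assumed parameters: spatial frequency is swept logarithmically along x, contrast logarithmically along y, so a swipe along the visibility boundary traces out the contrast sensitivity function. All names and default values here are illustrative:

        import numpy as np

        def sweep_image(width=1024, height=768, f_min=0.5, f_max=64.0,
                        c_min=0.002, c_max=1.0):
            """Grating swept in log frequency (x) and log contrast (y);
            returns luminance values in [0, 1], with contrast c_max in row 0."""
            x = np.linspace(0, 1, width)
            y = np.linspace(0, 1, height)[:, np.newaxis]
            freq = f_min * (f_max / f_min) ** x          # cycles per image width
            phase = 2 * np.pi * np.cumsum(freq) / width  # integrate frequency
            contrast = c_max * (c_min / c_max) ** y      # log contrast sweep
            return 0.5 * (1 + contrast * np.sin(phase))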

    Psychophysical Calibration of Mobile Touch-Screens for Vision Testing in the Field

    The now-ubiquitous touch-screen displays in cell phones and tablet computers make these devices an attractive option for vision testing outside of the laboratory or clinic. Accurate measurement of parameters such as contrast sensitivity, however, requires precise control of absolute and relative screen luminances. The nonlinearity of the display response (gamma) can be measured or checked using a minimum-motion technique similar to that developed by Anstis and Cavanagh (1983) for the determination of isoluminance. While the relative luminances of the color primaries vary between subjects (due to factors such as individual differences in pre-retinal pigment densities), the gamma nonlinearity can also be checked in the lab using a photometer. Here we compare results obtained using the psychophysical method with physical measurements for a number of different devices. In addition, we present a novel physical method using the device's built-in front-facing camera in conjunction with a mirror to jointly calibrate the camera and display. A high degree of consistency between devices is found, but some departures from ideal performance are observed. In spite of this, the effects of calibration errors and display artifacts on estimates of contrast sensitivity are found to be small.
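
    An illustrative sketch (not the paper's procedure) of the photometer check, assuming a simple power-law display model L = L_max (v / v_max)^gamma with no black-level offset; the function name and sample numbers are invented:

        import numpy as np

        def fit_gamma(gray_levels, luminances, v_max=255.0):
            """Fit gamma by linear regression in log-log coordinates:
            log(L / L_max) = gamma * log(v / v_max)."""
            v = np.asarray(gray_levels, dtype=float) / v_max
            L = np.asarray(luminances, dtype=float)
            mask = (v > 0) & (L > 0)
            gamma, _ = np.polyfit(np.log(v[mask]), np.log(L[mask] / L.max()), 1)
            return gamma

        # e.g. fit_gamma([16, 32, 64, 128, 255], [0.4, 1.7, 7.2, 30.1, 120.0])
        # returns a value near 2 for these illustrative readings.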

    Assessing Visual Delays using Pupil Oscillations

    Stark (1962) demonstrated vigorous pupil oscillations by illuminating the retina with a beam of light focussed to a small spot near the edge of the pupil. Small constrictions of the pupil are then sufficient to completely block the beam, amplifying the normal relationship between pupil area and retinal illuminance. In addition to this simple and elegant method, Stark also investigated more complex feedback systems using an electronic "clamping box," which provided arbitrary gain and phase delay between a measurement of pupil area and an electronically controlled light source. We have replicated Stark's results using a video-based pupillometer to control the luminance of a display monitor. Pupil oscillations were induced by imposing a linear relationship between pupil area and display luminance, with a variable delay. Slopes of the period-vs-delay function for 3 subjects are close to the predicted value of 2 (1.96-2.39), and the implied delays range from 254 to 376 milliseconds (with corresponding oscillation periods of 508 to 652 milliseconds). Our setup allows us to extend Stark's work by investigating a broader class of stimuli.
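
    A toy simulation (assumed first-order pupil dynamics, not the paper's model) of the clamped feedback loop: display luminance follows the pupil area measured one loop delay earlier, so the loop settles into an oscillation whose period grows roughly as twice the total delay. All names, constants, and dynamics here are assumptions:

        import numpy as np

        def simulate(delay=0.3, intrinsic=0.25, gain=4.0, dt=0.005, T=20.0):
            """Normalized pupil area under delayed luminance feedback."""
            rng = np.random.default_rng(1)
            n = int(T / dt)
            d = int((delay + intrinsic) / dt)  # total loop delay in samples
            tau = 0.1                          # pupil response time constant (s)
            area = np.ones(n)
            for t in range(1, n):
                lum = 1.0 + gain * (area[t - d] - 1.0) if t >= d else 1.0
                target = 1.0 - 0.5 * np.tanh(lum - 1.0)  # light -> constriction
                area[t] = area[t - 1] + (dt / tau) * (target - area[t - 1])
                area[t] += 1e-3 * rng.standard_normal()  # seed the oscillation
            return area

        # The interval between successive zero crossings of (area - area.mean())
        # gives the half-period; plotting period against the imposed delay
        # reproduces the approximately linear relation with slope near 2.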

    Evaluation of Tablet-Based Methods for Assessment of Contrast Sensitivity

    Three psychophysical methods for the measurement of visual contrast sensitivity on a tablet computer were evaluated and compared. A novel rapid method involving "swiping" a test image produced results comparable to those of more traditional methods in a fraction of the time.

    Measuring and Modeling Shared Visual Attention

    Multi-person teams are sometimes responsible for critical tasks, such as flying an airliner. Here we present a method that uses gaze-tracking data to assess shared visual attention, a term we use to describe the situation where team members are attending to a common set of elements in the environment. Gaze data are quantized with respect to a set of N areas of interest (AOIs) and used to construct a time series of N-dimensional vectors, with each vector component representing one of the AOIs; all components are set to 0 except the one corresponding to the currently fixated AOI, which is set to 1. The resulting sequence of vectors can be averaged in time, so that each vector component represents the proportion of time that the corresponding AOI was fixated within the given time interval. We present two methods for comparing sequences of this sort: one based on computing the time-varying correlation of the averaged vectors, and another based on a chi-square test of the hypothesis that the observed gaze proportions are drawn from identical probability distributions.
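
    A sketch of both comparison measures, with an assumed data layout (one AOI index per gaze sample) and invented function names: each sample is one-hot coded over N AOIs, box-car averaged in time, and the two observers' proportion vectors are compared by correlation and by a chi-square test:

        import numpy as np
        from scipy.stats import chi2_contingency, pearsonr

        def aoi_proportions(aoi_sequence, n_aoi, win):
            """One-hot code each sample, then moving-average each AOI column;
            each row gives the proportion of time on each AOI in the window."""
            onehot = np.eye(n_aoi)[np.asarray(aoi_sequence)]
            kernel = np.ones(win) / win
            return np.column_stack(
                [np.convolve(onehot[:, k], kernel, mode='valid')
                 for k in range(n_aoi)])

        def shared_attention(seq_a, seq_b, n_aoi=5, win=100):
            pa = aoi_proportions(seq_a, n_aoi, win)
            pb = aoi_proportions(seq_b, n_aoi, win)
            # time-varying correlation of the two averaged AOI vectors
            r = np.array([pearsonr(row_a, row_b)[0]
                          for row_a, row_b in zip(pa, pb)])
            # chi-square test: are the pooled gaze proportions drawn from
            # identical AOI probability distributions?
            counts = np.vstack([pa.sum(axis=0), pb.sum(axis=0)]) + 1e-9
            chi2, p, dof, expected = chi2_contingency(counts)
            return r, p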